STARS - 2013


Section: New Results

Introduction

This year Stars has proposed new algorithms related to its three main research axes: perception for activity recognition, semantic activity recognition, and software engineering for activity recognition.

Perception for Activity Recognition

Participants : Julien Badie, Slawomir Bak, Vasanth Bathrinarayanan, Piotr Bilinski, François Brémond, Guillaume Charpiat, Duc Phu Chau, Etienne Corvée, Carolina Garate, Vaibhav Katiyar, Ratnesh Kumar, Srinidhi Mukanahallipatna, Marco San Biago, Silviu Serban, Malik Souded, Kartick Subramanian, Anh Tuan Nghiem, Monique Thonnat, Sofia Zaidenberg.

This year Stars has extended an algorithm for automatically tuning the parameters of the people tracking algorithm. We have evaluated an algorithm for re-identifying people across a camera network, taking into account a large variety of potential features together with practical constraints. We have designed several original algorithms for the recognition of short actions and validated their performance on several benchmark datasets (e.g. ADL). We have also worked on video segmentation and representation, with different approaches and applications.
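As a purely illustrative aside on online parameter tuning (detailed in 6.8), the Python sketch below scores a small set of candidate tracker configurations on each video chunk with an online evaluation function and keeps the best-scoring one. The function names, parameter names, and scoring are hypothetical placeholders, not the team's implementation.

```python
# Hypothetical sketch of online tracking parameter adaptation: for each video
# chunk, score every candidate parameter set with an online evaluation function
# and keep the best-scoring one. All names and values are placeholders.

from typing import Callable, Dict, List, Sequence

ParamSet = Dict[str, float]

def adapt_parameters(
    chunks: Sequence[object],                           # consecutive video chunks
    candidates: List[ParamSet],                         # candidate tracker configurations
    run_tracker: Callable[[object, ParamSet], object],  # runs the tracker on one chunk
    evaluate: Callable[[object], float],                # online quality score, higher is better
) -> List[ParamSet]:
    """Return the parameter set selected for each chunk."""
    selected = []
    for chunk in chunks:
        scored = [(evaluate(run_tracker(chunk, p)), p) for p in candidates]
        best_params = max(scored, key=lambda sp: sp[0])[1]
        selected.append(best_params)
    return selected

# Toy usage with dummy components (illustrative only).
if __name__ == "__main__":
    candidates = [{"detection_threshold": t, "max_gap": g}
                  for t in (0.3, 0.5, 0.7) for g in (5, 15)]
    run_tracker = lambda chunk, p: {"tracks": chunk, "params": p}
    evaluate = lambda output: -abs(output["params"]["detection_threshold"] - 0.5)
    print(adapt_parameters(range(3), candidates, run_tracker, evaluate))
```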

More precisely, the new results for perception for activity recognition concern:

  • Background Subtraction and People Detection in Videos (6.2)

  • Tracking and Video Representation (6.3)

  • Video segmentation with shape constraint (6.4)

  • Articulating motion (6.5)

  • Lossless image compression (6.6)

  • People detection using RGB-D cameras (6.7)

  • Online Tracking Parameter Adaptation based on Evaluation (6.8)

  • People Detection, Tracking and Re-identification Through a Video Camera Network (6.9)

  • People Retrieval in a Network of Cameras (6.10)

  • Global Tracker: an Online Evaluation Framework to Improve Tracking Quality (6.11)

  • Human Action Recognition in Videos (6.12)

  • 3D Trajectories for Action Recognition Using Depth Sensors (6.13)

  • Unsupervised Sudden Group Movement Discovery for Video Surveillance (6.14)

  • Group Behavior Understanding (6.15)

Semantic Activity Recognition

Participants : Guillaume Charpiat, Serhan Cosar, Carlos-Fernando Crispim Junior, Hervé Falciani, Baptiste Fosty, Qiao Ma, Rim Romdhane.

During this period, we have thoroughly evaluated the generic event recognition algorithm using both types of sensors (RGB and RGB-D video cameras). This algorithm has been tested on more than 70 videos of older adults performing 15 minutes of physical exercises and cognitive tasks. In the Paris subway, we have demonstrated live recognition of group behaviours. We have also stored the meta-data (e.g. people trajectories) generated by processing 8 video cameras, each recording lasting 2 or 3 days. From these meta-data, we have automatically discovered a few hundred rare events, such as loitering and collapsing, which are displayed on the screens of subway security operators.
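As a purely illustrative companion to the rare-event discovery mentioned above, the sketch below flags trajectories whose dwell time is a statistical outlier, a naive stand-in for loitering detection; the feature, threshold, and data are invented and do not describe the actual data-mining method (see 6.21).

```python
# Hypothetical sketch: flag rare events from stored trajectory meta-data by
# marking trajectories whose dwell time is a statistical outlier (a naive
# proxy for loitering). Illustration only, not the method used on the subway data.

from statistics import mean, stdev

def flag_rare_trajectories(dwell_times, n_sigma=3.0):
    """Return indices of trajectories with unusually long dwell time."""
    mu, sigma = mean(dwell_times), stdev(dwell_times)
    return [i for i, t in enumerate(dwell_times) if t > mu + n_sigma * sigma]

# Example: dwell time (seconds) per tracked person in one camera view.
dwell = [12, 8, 15, 9, 11, 10, 240, 14, 13, 9, 7, 600]
print(flag_rare_trajectories(dwell, n_sigma=2.0))  # indices of loitering candidates
```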

Concerning semantic activity recognition, the contributions are:

  • Evaluation of an Activity Monitoring System for Older People Using Fixed Cameras (6.16)

  • A Framework for Activity Detection of Older People Using Multiple Sensors (6.17)

  • Walking Speed Detection on a Treadmill using an RGB-D Camera (6.18)

  • Serious Game for older adults with dementia (6.19)

  • Unsupervised Activity Learning and Recognition (6.20)

  • Extracting Statistical Information from Videos with Data Mining (6.21)

Software Engineering for Activity Recognition

Participants : François Brémond, Daniel Gaffé, Julien Gueytat, Sabine Moisan, Anh Tuan Nghiem, Annie Ressouche, Jean-Paul Rigault, Luis-Emiliano Sanchez.

This year Stars has continued the development of the SUP platform, which is the backbone of the team's experiments for implementing new algorithms. We have continued to improve our meta-modelling approach to support the development of video surveillance applications based on SUP; this year we have focused on metrics to drive dynamic architecture changes and on component management. We have also continued the development of a scenario analysis module (SAM), relying on formal methods, to support activity recognition in the SUP platform. We have improved the Clem toolkit, on which SAM is built. Finally, we are improving the way adaptation is performed in the definition of multiple services for a device-adaptive platform for scenario recognition.
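To give a flavour of how a runtime metric can drive a dynamic architecture change, the sketch below swaps one component implementation for another when measured throughput crosses a threshold; the component names, metric, and threshold are invented for illustration and do not reflect the actual SUP or meta-model design.

```python
# Hypothetical sketch: let a runtime metric (measured frame rate) drive a
# dynamic architecture change by swapping one component implementation for
# another. Component names, metric and threshold are invented for illustration.

class FastDetector:
    name = "fast_detector"
    def process(self, frame): return f"{self.name}({frame})"

class AccurateDetector:
    name = "accurate_detector"
    def process(self, frame): return f"{self.name}({frame})"

class Pipeline:
    def __init__(self):
        self.detector = AccurateDetector()

    def reconfigure(self, frame_rate_fps, min_fps=10.0):
        """Switch to the lighter component when throughput drops too low."""
        if frame_rate_fps < min_fps and not isinstance(self.detector, FastDetector):
            self.detector = FastDetector()
        elif frame_rate_fps >= min_fps and not isinstance(self.detector, AccurateDetector):
            self.detector = AccurateDetector()

pipeline = Pipeline()
for measured_fps in (25.0, 8.0, 22.0):  # metric values reported by the platform
    pipeline.reconfigure(measured_fps)
    print(measured_fps, "->", pipeline.detector.name)
```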

The contributions for this research axis are:

  • SUP (6.22)

  • Model-Driven Engineering for Activity Recognition (6.23)

  • Scenario Analysis Module (6.24)

  • The Clem Workflow (6.25)

  • Multiple Services for Device Adaptive Platform for Scenario Recognition (6.26)